A new report finds that 83% of firms are using AI, while only 13% have strong data-usage visibility.
Key Takeaways
- 83% of enterprises report using AI in daily operations; only 13% report having good visibility into how AI handles their data.
- Autonomous AI agents and external prompts to public LLMs are seen as the riskiest AI vectors: 76% of organizations call autonomous agents the most complex to secure, and 70% cite external prompts as high risk.
- Only 7% of organizations have a dedicated AI governance team, and only 11% feel fully prepared for forthcoming AI data governance regulations.
Cyera Research Labs, in collaboration with Cybersecurity Insiders, published the 2025 State of AI Data Security Report, based on a survey of 921 enterprise IT and cybersecurity professionals.
The report paints a stark picture: While 83% of organizations have integrated AI into daily operations, the infrastructure for governing AI data use remains weak and fragmented. Only 13% of businesses report strong visibility into how AI interacts with their sensitive data, raising concerns about risk, oversight, and regulatory compliance.
Key Findings
- AI adoption outpaces control:
  - While 83% of organizations report using AI, only 28% describe their use as extensive; more than half (55%) are still in the pilot stage.
  - Despite widespread use, only 13% of companies say they have "good" or "full" visibility into AI activity; nearly half (49%) admit to little or no visibility.
  - Logging is mostly reactive: a third of respondents review activity only after an incident occurs.
  - Only 11% of organizations can automatically block risky AI behavior; 57% cannot block or restrict risky AI activities at all.
- Governance and regulatory readiness remain weak:
  - Only 7% of organizations report having a dedicated AI-governance team.
  - Only 11% report feeling fully prepared for emerging AI data-governance regulations; 44% are partially prepared, and 31% are aware but unprepared.
  - Responsibility for AI governance is highly fragmented: many place it with IT, risk, or the C-suite, while 12% assign ownership to their CISO.
- Agents and prompts — the exposed edge:
  - Autonomous AI agents are considered the hardest to secure by 76% of respondents, and 70% cite external prompts to public LLMs as high risk.
  - Embedded AI in SaaS tools is more trusted, but 43% still find it challenging to secure.
  - 40% of organizations report unsanctioned or "shadow AI" operating outside official oversight.
  - Around 21% grant AI broad, default access to sensitive data; two-thirds say they have observed AI over-accessing data it didn't need.
  - Nearly a quarter (23%) say they have no controls whatsoever over prompts or outputs; filtering, monitoring, and redaction are deployed inconsistently.
  - Despite these risks, 77% of security teams report using AI in their own operations, often without comparable controls.
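The prompt controls the report finds lacking (filtering, monitoring, and redaction) can be sketched at their simplest as a redaction gate that sensitive text passes through before a prompt leaves the organization. The patterns and function names below are illustrative assumptions, not anything specified by the report; real deployments use far broader detectors (DLP engines, named-entity recognition):

```python
import re

# Illustrative patterns only; production systems use richer detection.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive substrings with placeholders before the prompt
    is sent to an external LLM; return redacted text plus a log of hits
    so that monitoring (not just filtering) is possible."""
    hits = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, hits

redacted, hits = redact_prompt("Contact alice@example.com, SSN 123-45-6789")
print(redacted)  # Contact [EMAIL REDACTED], SSN [SSN REDACTED]
print(hits)      # ['EMAIL', 'SSN']
```

Even a gate this small gives an organization both a control point (the substitution) and an audit trail (the hit log), which is exactly the pairing the 23% with "no controls whatsoever" lack.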
- Human-centric models are still applied to AI:
  - Most organizations apply human-focused identity controls to AI, leading to over-permissioned access and limited governance.
  - Only 16% treat AI as a distinct identity class, whereas 77% apply human rules or inconsistent controls.
  - Broad data access is common: 21% grant AI default access to sensitive data, and 66% report AI accessing more data than intended.
  - Few have integrated identity and data security for AI; 23% report no formal governance of AI data access.
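Treating AI as a distinct identity class, as only 16% of respondents do, in practice means deny-by-default machine identities whose every data grant is explicit and whose every access attempt is logged. A minimal sketch of that idea, with all class and field names being assumptions for illustration rather than anything the report prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class AIAgentIdentity:
    """A machine identity separate from human accounts: no inherited
    permissions, every grant explicit, every access attempt recorded."""
    name: str
    allowed_datasets: set[str] = field(default_factory=set)   # deny-by-default
    access_log: list[tuple[str, bool]] = field(default_factory=list)

    def grant(self, dataset: str) -> None:
        # An explicit, auditable grant; nothing is implied by role or team.
        self.allowed_datasets.add(dataset)

    def can_access(self, dataset: str) -> bool:
        allowed = dataset in self.allowed_datasets
        self.access_log.append((dataset, allowed))  # monitor denials too
        return allowed

agent = AIAgentIdentity("support-copilot")
agent.grant("ticket_history")
print(agent.can_access("ticket_history"))  # True: explicitly granted
print(agent.can_access("payroll"))         # False: denied by default
```

The design choice here is the inversion of the human model: instead of inheriting broad access from a role (the pattern behind the 66% who saw AI access more data than intended), the agent starts with nothing and accumulates only scoped grants.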
The report advances a compelling metaphor: in many organizations, AI is becoming a “shadow identity,” an autonomous actor within enterprise infrastructure that’s powerful, fast, and often unaccountable.
It also maps its findings against the OWASP Top 10 for LLM Applications, highlighting how enterprises are currently falling short of these guardrails.
The report issues a clear warning: As enterprises accelerate AI deployment (e.g., embedding tools such as SaaS AI copilots, large language models, and autonomous agents) without proper visibility, monitoring, and governance, AI becomes a fast-growing vector for data exposure and regulatory risk.
AI isn’t just another technology. It’s a new reality that must be governed at least as rigorously as other business risks if data integrity and compliance are to be maintained.

