
Microsoft Releases 2026 Data Security Index Detailing AI Security and Governance Risks

The report documents how generative AI use is driving new data security exposures and how organizations are adopting governance and control frameworks to manage them.

Key takeaways

  • 32% of enterprise data security incidents involve generative AI tools.
  • More than 70% of knowledge workers use AI tools at work, often outside formal controls.
  • 47% of organizations have implemented GenAI-specific security and governance measures.
  • 82% plan to use GenAI for data security operations, including monitoring, classification, and investigations.
  • 39% already use GenAI agents for security, with 58% piloting or evaluating them.

Microsoft’s 2026 Data Security Index reports that the widespread use of generative AI in enterprises is creating new data security and governance challenges that many organizations are still struggling to manage.

The report is based on a survey of 1,725 data security and information technology decision-makers across 10 countries, conducted between July 16 and Aug. 11, 2025, and on interviews with security leaders in the United States and the United Kingdom.

Microsoft found that 32% of reported enterprise data security incidents now involve GenAI tools, including employees pasting sensitive data into chatbots, uploading files into AI applications, or using consumer AI services outside corporate controls. The report states that more than 70% of knowledge workers are bringing AI tools into the workplace, often using personal accounts or unmanaged devices.

According to the report, this behavior limits organizations’ ability to track data flows, apply retention rules, or prevent regulated information from being exposed. “As GenAI becomes embedded in daily operations, organizations must balance the drive for productivity with robust governance and control,” Microsoft said in the report.

The Index indicates that enterprises are expanding formal AI governance and security programs. 47% of organizations now have GenAI-specific security controls, up from 39% the previous year. These controls include blocking uploads of sensitive data to AI tools, enforcing corporate identity for AI access, and requiring employee training on the approved use of AI.
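To make the first of those controls concrete, the sketch below shows one way a pre-upload filter could work: outbound text is scanned for patterns that resemble regulated data before it reaches an external GenAI endpoint. This is a minimal illustration under stated assumptions; the pattern list, the approved-endpoint allow-list, and the function names are hypothetical and are not drawn from any product described in the report.

```python
import re

# Hypothetical pre-upload filter that a proxy or browser extension might apply
# before text is sent to an external GenAI service. Patterns and endpoint
# names below are illustrative assumptions, not from the report.

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

# Hypothetical allow-list of corporate-approved AI endpoints.
APPROVED_AI_ENDPOINTS = {"copilot.internal.example.com"}


def classify_outbound_text(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]


def block_upload(text: str, destination: str) -> bool:
    """Deny the request when sensitive data is headed to an unapproved AI tool."""
    findings = classify_outbound_text(text)
    return bool(findings) and destination not in APPROVED_AI_ENDPOINTS
```

In practice, organizations would pair a check like this with enforced corporate identity for AI access and user training, as the report notes, rather than relying on pattern matching alone.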

The report also finds that 82% of organizations plan to use GenAI in their data security operations, up from 64% in 2024. Common uses include detecting data loss risks, classifying sensitive information, monitoring AI-driven data flows, and accelerating incident investigations.

Microsoft reported that companies are increasingly deploying GenAI agents that can analyze data movement, apply protection policies, and respond to security alerts. 39% of organizations already use GenAI agents in security programs, and another 58% are piloting or evaluating them, according to the Index. The report notes that most organizations still require human review of AI-generated decisions affecting data access and protection.
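The human-review requirement the report mentions can be pictured as a simple gate between an agent's recommendation and the policy change it would trigger. The sketch below is a hypothetical illustration, assuming an agent that emits recommendations with a confidence score; the class and function names are invented for this example and do not describe Microsoft's tooling.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical human-in-the-loop gate for agent-generated data protection
# actions. All names here are illustrative assumptions.

@dataclass
class AgentRecommendation:
    resource: str        # e.g. a file share or collaboration site
    action: str          # e.g. "restrict_access", "apply_label"
    rationale: str       # the agent's explanation, kept for the audit trail
    confidence: float    # agent-reported confidence, 0.0 to 1.0

@dataclass
class ReviewQueue:
    pending: list[AgentRecommendation] = field(default_factory=list)

    def submit(self, rec: AgentRecommendation,
               apply_policy: Callable[[AgentRecommendation], None],
               auto_apply_threshold: float = 1.1) -> None:
        """Route a recommendation: auto-apply only above the threshold,
        otherwise hold it for a human reviewer. The default threshold (>1.0)
        means nothing is applied without review, reflecting the report's note
        that most organizations still require human sign-off."""
        if rec.confidence >= auto_apply_threshold:
            apply_policy(rec)
        else:
            self.pending.append(rec)

    def approve(self, index: int,
                apply_policy: Callable[[AgentRecommendation], None]) -> None:
        """A human reviewer approves a queued recommendation and it is applied."""
        rec = self.pending.pop(index)
        apply_policy(rec)
```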

The Index also shows that organizations are consolidating security and governance functions. 86% of respondents said integrated data security platforms provide stronger protection and governance than disconnected tools, allowing companies to apply consistent rules across cloud systems, collaboration platforms, and AI applications.

Microsoft said that the rapid growth of AI-generated content, automated agents, and cross-platform data sharing is making traditional perimeter-based security models less effective. The report emphasizes the need for continuous data monitoring, identity-based controls, and centralized policy enforcement as AI becomes part of everyday business operations.
